Both target-specific and domain-invariant features can facilitate Open Set Domain Adaptation (OSDA). To exploit these features, we propose a Knowledge Exchange (KnowEx) model that jointly trains two complementary constituent networks: (1) a Domain-Adversarial Network (DAdvNet), which learns a domain-invariant representation through which supervision in the source domain can be exploited to infer the class information of unlabeled target data; and (2) a Private Network (PrivNet), exclusive to the target domain, which is beneficial for discriminating between instances of known and unknown classes. The two constituent networks exchange training experience during learning. To this end, we exploit an adversarial perturbation process against DAdvNet to regularize PrivNet, which enhances the complementarity between the two networks. At the same time, we incorporate an adaptation layer into DAdvNet to address the unreliability of PrivNet's experience. DAdvNet and PrivNet are therefore able to mutually reinforce each other during training. We have conducted thorough experiments on multiple standard benchmarks to verify the effectiveness and superiority of KnowEx in OSDA.

The Coarse-To-Fine (CTF) matching scheme has been widely applied to reduce computational complexity and matching ambiguity in stereo matching and optical flow tasks by converting image pairs into multi-scale representations and matching from coarse to fine levels. Despite its efficiency, it suffers from several weaknesses, such as blurring edges and missing small structures like thin bars and holes. We find that pixels belonging to small structures and edges are often assigned wrong disparity/flow values in the upsampling process of the CTF framework, introducing errors at the fine levels and leading to these weaknesses.
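The failure mode just described is easy to reproduce in isolation: build a disparity map containing a one-pixel-wide structure, form a coarse level by 2x block averaging, and upsample it back naively. A toy NumPy illustration (hypothetical values, not the paper's pipeline):

```python
import numpy as np

# Fine-level ground-truth disparity: background at 2.0 px, with a thin
# one-pixel-wide vertical bar at 9.0 px.
fine = np.full((8, 8), 2.0)
fine[:, 3] = 9.0  # thin structure

# Coarse level: 2x downsampling by averaging 2x2 blocks, as in a CTF pyramid.
coarse = fine.reshape(4, 2, 4, 2).mean(axis=(1, 3))

# Naive upsampling back to the fine level (nearest-neighbor repeat).
up = np.repeat(np.repeat(coarse, 2, axis=0), 2, axis=1)

# The bar's disparity is corrupted: its pixels now carry a blurred mix of
# bar and background values instead of the true 9.0 px.
bar_err = np.abs(up[:, 3] - fine[:, 3]).max()
```

After the round trip the bar's pixels are off by 3.5 px, exactly the kind of coarse-level error that propagates to the fine levels.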
We observe that these wrong disparity/flow values can be avoided by selecting the best-matched value within a pixel's neighborhood, which inspires us to propose a novel differentiable Neighbor-Search Upsampling (NSU) module. The NSU module first estimates matching scores and then selects the best-matched disparity/flow for each pixel from its neighbors. It effectively preserves fine structural details by exploiting information from the finer level while upsampling the disparity/flow. The proposed module is a drop-in replacement for naive upsampling in the CTF matching framework and allows the network to be trained end-to-end. By integrating the NSU module into a baseline CTF matching network, we design our Detail-Preserving Coarse-To-Fine (DPCTF) matching network. Comprehensive experiments demonstrate that DPCTF boosts performance for both stereo matching and optical flow. Notably, DPCTF achieves new state-of-the-art performance on both tasks: it outperforms the competitive baseline (Bi3D) by 28.8% (from 0.73 to 0.52) in EPE on the FlyingThings3D stereo dataset and ranks first on the KITTI 2012 flow benchmark. The code is available at https://github.com/Deng-Y/DPCTF.

Deep learning has recently been intensively studied in the context of image compressive sensing (CS) to discover and represent complicated image structures. These approaches, however, either lack flexibility for arbitrary sampling ratios or lack an explicit deep-learned regularization term. This paper aims to solve the CS reconstruction problem by combining a deep-learned regularization term with a proximal operator. We first introduce a regularization term using a carefully designed residual-regressive net, which can measure the distance between a corrupted image and a clean image set and accurately identify to which subspace the corrupted image belongs.
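Regularizers of this distance-to-clean-set kind are typically applied inside a proximal-gradient reconstruction loop, alternating a gradient step on the data-fidelity term with a proximal (denoising) step. A minimal NumPy sketch of that generic pattern follows; it is not the paper's networks, a hand-rolled neighbor-averaging smoother stands in for any learned proximal operator, and all names are hypothetical:

```python
import numpy as np

def proximal_cs_recover(y, A, n_iters=500, step=None):
    """Plug-and-play proximal-gradient sketch for CS recovery.
    y: measurements of shape (m,); A: sensing matrix of shape (m, n)."""
    if step is None:
        step = 1.0 / np.linalg.norm(A, 2) ** 2  # 1/L for the data term
    x = A.T @ y  # initial estimate via back-projection
    for _ in range(n_iters):
        # Gradient step on the data-fidelity term (1/2)||Ax - y||^2.
        x = x - step * (A.T @ (A @ x - y))
        # Proximal step: a simple neighbor-averaging smoother stands in
        # for a learned proximal network.
        smoothed = np.convolve(x, np.ones(3) / 3, mode="same")
        x = 0.5 * x + 0.5 * smoothed
    return x

# Tiny demo: recover a smooth signal from random undersampled measurements.
rng = np.random.default_rng(0)
n, m = 64, 48
x_true = np.sin(np.linspace(0, 3 * np.pi, n))
A = rng.standard_normal((m, n)) / np.sqrt(m)
y = A @ x_true
x_hat = proximal_cs_recover(y, A)
rel_err = np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true)
```

The smoothing prior disambiguates the underdetermined system (48 measurements for 64 unknowns); swapping in a stronger learned proximal operator is what the approaches in this line of work do.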
We then address the proximal operator with a tailored dilated residual channel attention net, which enables the learned proximal operator to map a distorted image into the clean image set. We adopt an adaptive proximal selection strategy to embed the network into the loop of the CS image reconstruction algorithm. Moreover, a self-ensemble strategy is presented to improve CS recovery performance. We further use state evolution to analyze the effectiveness of the designed networks. Extensive experiments demonstrate that our method yields more accurate reconstructions (a PSNR gain of over 1 dB) than competing approaches, achieving state-of-the-art image CS reconstruction performance. The test code is available at https://github.com/zjut-gwl/CSDRCANet.

Three-dimensional dense face alignment and reconstruction in the wild is a challenging problem, as partial facial information is commonly missing in occluded and large-pose face images. Large head pose variations also enlarge the solution space and make modeling more difficult. Our key idea is to model occlusion and pose explicitly, decomposing this challenging task into several more manageable subtasks. To this end, we propose an end-to-end framework, termed the Self-aligned Dual face Regression Network (SADRNet), which predicts a pose-dependent face and a pose-independent face; the two are combined by an occlusion-aware self-alignment to generate the final 3D face. Extensive experiments on two popular benchmarks, AFLW2000-3D and Florence, demonstrate that the proposed method performs significantly better than existing state-of-the-art methods.

This work addresses the challenging problem of reflection symmetry detection in unconstrained environments.
Starting from an understanding of how the visual cortex manages planar symmetry detection, it is proposed to treat the problem in two stages: i) the design of a stable metric that extracts subsets of consistently oriented candidate segments wherever the underlying 2D signal exhibits definite near-symmetric correspondences; and ii) the ranking of such segments on the basis of the surrounding gradient-orientation specularity, so as to reflect real symmetric object boundaries. Since these operations mirror the way the human brain performs planar symmetry detection, a better correspondence can be established between the outcomes of the proposed algorithm and a human-constructed ground truth. Compared with the test sets used in recent symmetry detection competitions, a remarkable performance gain is observed. In addition, further validation has been achieved by conducting perceptual experiments with users on a newly built dataset.
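The specularity ranking of stage ii) can be illustrated in the simplest setting of a vertical candidate axis: under reflection about the axis, image x-gradients flip sign while y-gradients are preserved, so a cosine score over mirrored gradient columns peaks at a true symmetry axis. A toy NumPy sketch (all names hypothetical; this is not the paper's metric):

```python
import numpy as np

def specularity_score(grad_x, grad_y, axis_col):
    """Cosine score of gradient specularity about a vertical axis at
    column `axis_col`: reflection negates grad_x and preserves grad_y."""
    w = grad_x.shape[1]
    half = min(axis_col, w - 1 - axis_col)
    score = 0.0
    for d in range(1, half + 1):
        left = np.concatenate([grad_x[:, axis_col - d], grad_y[:, axis_col - d]])
        right = np.concatenate([-grad_x[:, axis_col + d], grad_y[:, axis_col + d]])
        denom = np.linalg.norm(left) * np.linalg.norm(right) + 1e-8
        score += left @ right / denom
    return score / half

def best_vertical_axis(img):
    gy, gx = np.gradient(img.astype(float))  # returns (d/drow, d/dcol)
    cols = range(1, img.shape[1] - 1)
    return max(cols, key=lambda c: specularity_score(gx, gy, c))

# Demo: a Gaussian blob that is mirror-symmetric about column 16.
xs, ys = np.arange(33), np.arange(25)
img = np.exp(-((xs[None, :] - 16) ** 2 / 30.0 + (ys[:, None] - 12) ** 2 / 20.0))
axis = best_vertical_axis(img)
```

Because the score is a normalized cosine, it is invariant to local contrast, which is one reason gradient-orientation (rather than intensity) specularity is a robust ranking cue.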